7 research outputs found

    Characterisation of crosstalk defects in submicron CMOS VLSI interconnects

    Get PDF
    The main problem addressed in this research work is the crosstalk defect, defined as an unexpected signal change due to coupling between signal or power lines. Its characteristics under three proposed models are investigated to determine whether such noise can lead to real logic faults in IC systems. As a result, a mathematical analysis for various bus systems was established, and three main factors were found to determine the amount of crosstalk: i) how the input buffers are sized; ii) the physical arrangement of the tracks; and iii) the number of switching tracks involved. Minimum track width and separation lead to the highest crosstalk, while increasing the track length contributes little variation. Higher levels of crosstalk are also found in higher metal layers, due mainly to the reduced capacitance to the substrate. Crosstalk is at its maximum when the track concerned is the middle track of a bus connected to a weak buffer while the other signal lines are switching. From this information, a worst-case analysis for various bus configurations is proposed for 0.7, 0.5 and 0.35 µm CMOS technologies. For most conventional logic circuits, a crosstalk of about half the supply voltage is required for a fault to occur. For buffer circuits, the level of crosstalk required depends strongly on the transition voltage, which is in turn controlled by the sizing of the n- and p-MOS transistors forming the buffer. It is concluded that, in the general case, if crosstalk can be kept to no more than 30% of the supply voltage, the circuit can be considered very reliable and virtually free from crosstalk faults. Finally, test structures are suggested so that real measurements can be made to verify the simulation results.
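    The layer-dependence noted above can be illustrated with a generic first-order model of capacitive crosstalk: a quiet victim line sees a charge-sharing divider between its coupling capacitance to the aggressor and its capacitance to ground. This is a textbook approximation, not one of the thesis's three models, and the capacitance values below are illustrative only.

```python
# First-order estimate of capacitive crosstalk on a quiet victim line:
# a charge divider between coupling capacitance (Cc) and the line's
# capacitance to ground (Cg). Generic textbook model with illustrative
# numbers, not the specific models or extracted values from the thesis.

def crosstalk_peak(vdd, c_couple, c_ground):
    """Peak noise voltage induced on a floating victim line."""
    return vdd * c_couple / (c_couple + c_ground)

if __name__ == "__main__":
    vdd = 3.3  # supply voltage (V), typical of 0.35 um CMOS
    # Higher metal layers: smaller Cg to substrate -> larger noise,
    # matching the trend described in the abstract.
    for cg_ff in (100.0, 50.0, 25.0):
        v = crosstalk_peak(vdd, c_couple=60.0, c_ground=cg_ff)
        print(f"Cg={cg_ff:5.1f} fF -> noise {v:.2f} V "
              f"({100 * v / vdd:.0f}% of Vdd)")
```

    Note how reducing Cg pushes the induced noise past the 30%-of-Vdd reliability bound quoted in the abstract.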

    EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    Get PDF
    Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the use of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented as a stacked autoencoder (SAE) using a hierarchical feature learning approach. The input features of the network are the power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of the EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal-component-based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance than SVM and naive Bayes classifiers.
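    The preprocessing chain described here (PCA over PSD features, then per-block re-standardisation of the components) can be sketched as follows. The shapes and the adaptation rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Minimal sketch: PCA to compress PSD features, then a covariate-shift
# step that re-standardises each principal component within a recording
# block. Dimensions (128 epochs x 40 PSD features, 10 components) are
# assumed for illustration.

def pca_fit(X, n_components):
    """Return mean and top principal axes of X (samples x features)."""
    mu = X.mean(axis=0)
    # SVD of the centred data gives principal directions in Vt.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def covariate_shift_adapt(Z):
    """Standardise each component within the block (zero mean, unit var)."""
    return (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 40))        # epochs x PSD features
mu, W = pca_fit(X, n_components=10)
Z = covariate_shift_adapt((X - mu) @ W.T)
print(Z.shape)  # (128, 10)
```

    The adapted components Z would then feed the stacked autoencoder; re-standardising per block is one simple way to counter the slow drift of EEG statistics.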

    Comparison of EEG measurement of upper limb movement in motor imagery training system

    No full text
    Abstract Background One of the most promising applications of electroencephalogram (EEG)-based brain-computer interfaces is stroke rehabilitation. Implemented either as a standalone motor imagery (MI) training system or as part of a rehabilitation robotic system, such interfaces have been shown in many studies to help restore motor control in stroke patients. Hand movements have been widely chosen as MI tasks. Although potentially more challenging to analyze, wrist and forearm movements such as wrist flexion/extension and forearm pronation/supination should also be considered as MI tasks, because these movements are part of the main exercises given to patients in conventional stroke rehabilitation. This paper evaluates the effectiveness of such movements as MI tasks. Methods Three movement tasks, hand opening/closing, wrist flexion/extension and forearm pronation/supination, were chosen as motor imagery tasks for both hands. Eleven subjects participated in the experiment. All of them completed the hand opening/closing task session. Ten subjects completed two MI task sessions, hand opening/closing and wrist flexion/extension. Five subjects completed all three MI task sessions. Each MI task comprised 8 sessions spanning a 4-week period. For classification, feature extraction based on the common spatial pattern (CSP) algorithm was used. Two variants were implemented: conventional CSP (termed WB) and a variant with an increased number of features obtained by filtering the EEG data into five bands (termed FB). Classification was performed by linear discriminant analysis (LDA) and support vector machine (SVM). Results Eight-fold cross-validation was applied to the EEG data. LDA and SVM gave comparable classification accuracy. FB achieved significantly higher classification accuracy than WB. The accuracy of classifying the wrist flexion/extension task was higher than that of the hand opening/closing task in all subjects. Classifying the forearm pronation/supination task achieved higher accuracy than the hand opening/closing task in most subjects, but lower accuracy than the wrist flexion/extension task in all subjects. Significant improvements in classification accuracy were found in nine subjects when individual sessions of all MI tasks were considered. The results of classifying hand opening/closing against wrist flexion/extension were comparable to those of classifying hand opening/closing against forearm pronation/supination, while classifying wrist flexion/extension against forearm pronation/supination gave lower accuracy than the pairs involving the hand movement task. Conclusion The high classification accuracy of the three MI tasks supports the feasibility of an EEG-based stroke rehabilitation system using these movements. Either LDA or SVM can be chosen as the classifier, since the difference in their accuracies is not statistically significant. Its significantly higher classification accuracy makes FB more suitable than WB for classifying MI tasks. More training sessions could potentially lead to better accuracy, as evident in most subjects in this experiment.
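    The core of the feature pipeline described here, CSP, can be sketched with plain NumPy: find spatial filters that maximise variance for one class while minimising it for the other, then take log-variances of the filtered trials as features. This corresponds to the single-band "WB" variant; "FB" would repeat it per frequency band. Trial shapes below are illustrative assumptions.

```python
import numpy as np

# NumPy-only sketch of CSP (the single-band "WB" variant). Filter-bank
# CSP ("FB") would repeat this per frequency band. Shapes are
# illustrative: trials x channels x samples.

def csp_filters(X1, X2, n_pairs=2):
    """Return 2*n_pairs spatial filters from two classes of EEG trials."""
    def mean_cov(X):
        covs = [t @ t.T / np.trace(t @ t.T) for t in X]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalised eigenproblem C1 w = lam (C1 + C2) w.
    vals, vecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
    order = np.argsort(vals.real)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]  # both extremes
    return vecs.real[:, keep].T

def csp_features(W, trial):
    """Log-variance of the spatially filtered trial."""
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())

rng = np.random.default_rng(1)
X1 = rng.normal(size=(20, 8, 250))        # 20 trials, 8 ch, 250 samples
X2 = rng.normal(size=(20, 8, 250)) * 1.5  # second class, higher variance
W = csp_filters(X1, X2)
print(csp_features(W, X1[0]).shape)  # (4,)
```

    The resulting feature vectors would then go to LDA or SVM, as in the study.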

    Automatic Speech Discrimination Assessment Methods Based on Event-Related Potentials (ERP)

    No full text
    Speech discrimination is used by audiologists in diagnosing and determining treatment for hearing loss patients. Usually, assessing speech discrimination requires subjective responses. Using electroencephalography (EEG), a method based on event-related potentials (ERPs) could provide an objective speech discrimination assessment. In this work, we propose a visual-ERP-based method to assess speech discrimination using pictures that represent word meaning. The proposed method was implemented with three strategies, each with a different number of pictures and test sequences. Machine learning was adopted to classify between the task conditions based on features extracted from the EEG signals. The results of the proposed method were compared to those of a similar visual-ERP-based method using letters and of a method based on the auditory mismatch negativity (MMN) component. The P3 component and the late positive potential (LPP) component were observed in the two visual-ERP-based methods, while MMN was observed during the MMN-based method. Two of the three strategies of the proposed method, along with the MMN-based method, achieved approximately 80% average classification accuracy using a combination of support vector machine (SVM) and common spatial pattern (CSP). Potentially, these methods could serve as a pre-screening tool to make speech discrimination assessment more accessible, particularly in areas with a shortage of audiologists.
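    The ERP step underlying this kind of method can be sketched as follows: cut stimulus-locked epochs from the continuous EEG, subtract a pre-stimulus baseline, and average across trials so components such as P3 or LPP emerge from the background noise. The sampling rate, window lengths and event times below are assumptions, not the study's parameters.

```python
import numpy as np

# Sketch of stimulus-locked ERP extraction: epoch the continuous EEG
# around event onsets, baseline-correct each epoch, average across
# trials. Sampling rate and windows are illustrative assumptions.

FS = 250  # Hz, assumed sampling rate

def extract_erp(eeg, events, pre_s=0.2, post_s=0.8):
    """Average baseline-corrected epochs around each event sample index."""
    pre, post = int(pre_s * FS), int(post_s * FS)
    epochs = []
    for ev in events:
        ep = eeg[:, ev - pre: ev + post].astype(float)
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correct
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # channels x time average (the ERP)

rng = np.random.default_rng(2)
eeg = rng.normal(size=(32, 5000))       # 32 channels of continuous EEG
events = np.arange(500, 4500, 400)      # stimulus onsets (sample indices)
erp = extract_erp(eeg, events)
print(erp.shape)  # (32, 250)
```

    Single-trial epochs of this form, rather than the average, would be what the CSP+SVM classifier operates on.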

    Real-time EEG-based happiness detection system

    No full text
    We propose to use real-time EEG signals to classify happy and unhappy emotions elicited by pictures and classical music. We use power spectral density (PSD) as the feature and SVM as the classifier. The average accuracies of the subject-dependent and subject-independent models are approximately 75.62% and 65.12%, respectively. Considering each pair of channels, the temporal pair (T7 and T8) gives better results than the other areas. Considering different frequency bands, the high-frequency bands (beta and gamma) give better results than the low-frequency bands. Considering different durations of emotion elicitation, the result from 30 seconds does not differ significantly from the result from 60 seconds. Based on these results, we implement a real-time EEG-based happiness detection system using only one pair of channels. Furthermore, we develop games based on the happiness detection system to help users recognize and control their happiness.
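    The feature step described here, band power of the T7/T8 channel pair in the high-frequency bands, can be sketched with a simple FFT periodogram. The sampling rate and band edges are assumptions for illustration; the study's exact PSD estimator is not specified in the abstract.

```python
import numpy as np

# Sketch of band-power features for a happiness classifier: periodogram
# power of one EEG channel pair in the beta and gamma bands. Sampling
# rate and band edges are assumed, not taken from the paper.

FS = 128  # Hz, assumed sampling rate

def band_power(x, lo, hi):
    """Mean power of signal x within [lo, hi) Hz (simple periodogram)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def happiness_features(t7, t8):
    """Beta and gamma band powers of the temporal pair (T7, T8)."""
    bands = [(13, 30), (30, 45)]  # beta, gamma (Hz)
    return np.array([band_power(ch, lo, hi)
                     for ch in (t7, t8) for lo, hi in bands])

rng = np.random.default_rng(3)
t7, t8 = rng.normal(size=(2, FS * 30))  # 30 s of data per channel
print(happiness_features(t7, t8).shape)  # (4,)
```

    A 4-dimensional feature vector like this, fed to an SVM, is small enough to compute in real time on each incoming window.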